Trust in technology is an emerging research domain that examines trust in the technology artifact instead of trust in people. Although previous research finds that trust in technology can predict important outcomes, little research has examined the effect of unmet trust-in-technology expectations on trusting intentions. Furthermore, both trust and expectation disconfirmation theories suggest that trust disconfirmation effects may be more complex than the linear expectation disconfirmation model depicts. However, this complexity may only exist under certain contextual conditions. The current study contributes to this literature by introducing a nonlinear expectation disconfirmation theory model that extends understanding of trust-in-technology expectations and disconfirmation. Not only does the model include both technology trust expectations and technology trusting intention, but it also introduces the concept of expectation maturity as a contextual factor. We collected data from three technology usage contexts that differ in expectation maturity, which we operationalize as the length of the introductory period. We find that the situation, in terms of expectation maturity, consistently matters. Using polynomial regression and response surface analyses, we find that in contexts with a longer introductory period (i.e., higher expectation maturity), disconfirmation has a nonlinear relationship with trusting intention. When the introductory period is shorter (i.e., expectation maturity is lower), disconfirmation has a linear relationship with trusting intention. This unique set of empirical findings shows when it is valuable to use nonlinear modeling for understanding technology trust disconfirmation. We conclude with implications for future research.
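The polynomial regression underlying response surface analysis can be illustrated with a minimal sketch. The variable names, simulated data, and coefficient values below are hypothetical and are not the study's actual measures; the sketch only shows the quadratic specification that such analyses fit.

```python
import numpy as np

def fit_response_surface(expectation, experience, intention):
    """Fit the quadratic polynomial regression used in response surface
    analysis: Y = b0 + b1*E + b2*P + b3*E^2 + b4*E*P + b5*P^2,
    where disconfirmation is the discrepancy between experienced
    performance (P) and expectation (E). Returns the six coefficients."""
    E = np.asarray(expectation)
    P = np.asarray(experience)
    X = np.column_stack([np.ones_like(E), E, P, E**2, E * P, P**2])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(intention), rcond=None)
    return coefs

# Illustrative synthetic data with a built-in nonlinear (quadratic)
# disconfirmation effect: intention drops as |P - E| grows.
rng = np.random.default_rng(0)
E = rng.normal(size=500)
P = rng.normal(size=500)
Y = 0.5 + 0.3 * P - 0.3 * E - 0.2 * (P - E) ** 2 + rng.normal(scale=0.1, size=500)

b = fit_response_surface(E, P, Y)
```

A nonlinear disconfirmation effect appears as nonzero curvature terms (the coefficients on E^2, E*P, and P^2); their sum gives the curvature along the line of congruence (E = P), which is what response surface analysis inspects.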
Phishing is a major threat to individuals and organizations. Along with billions of dollars lost annually, phishing attacks have led to significant data breaches, loss of corporate secrets, and espionage. Despite the significant threat, potential phishing targets have little theoretical or practical guidance on which phishing tactics are most dangerous and require heightened caution. The current study extends persuasion and motivation theory to postulate why certain influence techniques are especially dangerous when used in phishing attacks. We evaluated our hypotheses using a large field experiment that involved sending phishing messages to more than 2,600 participants. Results indicated a disparity in levels of danger presented by different influence techniques used in phishing attacks. Specifically, participants were less vulnerable to phishing influence techniques that relied on fictitious prior shared experience and were more vulnerable to techniques offering a high level of self-determination. By extending persuasion and motivation theory to explain the relative efficacy of phishers' influence techniques, this work clarifies significant vulnerabilities and lays the foundation for individuals and organizations to combat phishing through awareness and training efforts.
Recent work in journals such as MIS Quarterly and Management Science has highlighted the importance of evaluating the influence of common method bias (CMB) on the results of statistical analysis. In this research note, we assess the utility of the unmeasured latent method construct (ULMC) approach in partial least squares (PLS), introduced by Liang et al. (2007). Such an assessment of the ULMC approach is important because it has been employed in 76 studies since it appeared in MIS Quarterly in early 2007. Using data generated via Monte Carlo simulations, we use PLS structural equation modeling (SEM) to demonstrate that the ULMC approach of Liang et al. is able neither to detect nor to control for common method bias. The method construct yielded negligible estimates regardless of whether some, large, or no method bias was introduced in the simulated data. Our study contributes to the IS and research methods literature by illustrating that, and explaining why, the ULMC approach does not accurately detect common method bias in PLS. Further, our results build on prior work using covariance-based SEM that questions the usefulness of the ULMC technique for detecting CMB.
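The logic of such a Monte Carlo design can be sketched in a few lines: generate indicator data with a known, controllable amount of common method variance, so that any detection technique can be checked against the ground truth. The simulation below is a hypothetical illustration of that setup, not the authors' code, and it uses a simple correlation-inflation check rather than a full PLS estimation.

```python
import numpy as np

def simulate_indicators(n=1000, method_var=0.0, seed=42):
    """Simulate indicators for two constructs, each measured by three
    items. Each item loads on its trait factor and, optionally, on a
    shared method factor whose variance share is `method_var`."""
    rng = np.random.default_rng(seed)
    trait1 = rng.normal(size=n)
    trait2 = rng.normal(size=n)   # traits are uncorrelated by design
    method = rng.normal(size=n)   # common method factor shared by all items
    m = np.sqrt(method_var)
    t = np.sqrt(max(0.0, 0.7 - method_var))  # trait variance share
    e = np.sqrt(0.3)                          # unique error share
    items = []
    for trait in (trait1, trait2):
        for _ in range(3):
            items.append(t * trait + m * method + e * rng.normal(size=n))
    return np.column_stack(items)

def cross_construct_corr(data):
    """Mean correlation between items of construct 1 (columns 0-2) and
    construct 2 (columns 3-5). Because the traits are uncorrelated,
    any nonzero value here reflects injected method variance."""
    corr = np.corrcoef(data, rowvar=False)
    return corr[:3, 3:].mean()

r_clean = cross_construct_corr(simulate_indicators(method_var=0.0))
r_biased = cross_construct_corr(simulate_indicators(method_var=0.3))
```

With no method variance the cross-construct correlations hover near zero; with a 30% method variance share they rise to roughly 0.3, giving the known bias level against which a technique such as the ULMC approach can be validated.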
The article presents the results of a field study that investigated the type of internet fraud known as phishing, focusing on factors that make people more or less susceptible to being deceived in this way. The study participants received a phishing email message asking for information they were not supposed to divulge. Four behavioral factors were identified as correlating with a tendency to be duped by the phishing message. Based on these findings, suggestions are provided for anti-phishing measures, centering on computer user education.